Continuing the thread initiated on December 22nd, titled “We envision a world where everyone, no matter their profession… can have immediate access to dozens of experts”, and building upon a more recent post of January 23rd regarding the “omnipresent” and nearly omniscient artificial intelligence model ChatGPT and its intriguing insights into The Value of Life for an Elderly Person in a Coma, I am now presenting the findings of a search conducted today on the Scopus database.
Among the more than 80 million scientific publications indexed on the platform, a noteworthy paper by a researcher at the University of Manchester has surfaced. Titled “Open Artificial Intelligence Platforms in Nursing Education: Tools for Academic Progress or Abuse?”, it lists ChatGPT as its second author.
In light of this consequential development, I am revisiting the three questions I posed in a previous post on August 14, 2020, in which I commented on an article in The Economist highlighting, for the first time, the AI model’s ability to produce texts indistinguishable from human-authored content:
“Let’s imagine that a human improved a text generated by GPT-3: can we really say that the improved text has two co-authors? Or should it be that GPT-3 is the only real author and that the human contribution was a minor one that needs only to be mentioned in the acknowledgements section? But taking into account intrinsic human narcissism (especially in the academic field, as per the findings of Brunell et al. (2011)), is it plausible that humans are capable of admitting that they made a minimal contribution to an article and that the merit belongs exclusively to GPT-3?” https://pacheco-torgal.blogspot.com/2020/08/is-artificial-intelligence-leading-us.html
PS – On page 56 of The Economist issue dated 4-10 February 2023, an article on the AI race discusses the challengers to ChatGPT, described as “the fastest-growing consumer application in history.” The accompanying image (No. 3) shows that Amazon and Meta “respectively produce two-thirds and four-fifths as much AI research as Stanford University… Alphabet and Microsoft churn out considerably more.” This raises a pertinent question: in a world rife with research misconduct, how can we trust a corporation’s research results (consider the precedents of the Nikola, Theranos, and Volkswagen frauds), especially when negative outcomes could precipitate a sharp decline in stock value or even bankruptcy?